We develop a Synthetic Fusion Pyramid Network (SPF-Net) with a scale-aware loss function for accurate crowd counting. Existing crowd-counting methods assume that the training annotation points are accurate and thus ignore the fact that noisy annotations can lead to large model-learning bias and counting error, especially for highly dense crowds that appear far away. To the best of our knowledge, this work is the first to properly handle such noise at multiple scales in an end-to-end loss design and thus push the crowd-counting state-of-the-art. We model the noise of each crowd annotation point as a Gaussian and derive the crowd probability density map from the input image. We then approximate the joint distribution of crowd density maps with the full covariance of multiple scales and derive a low-rank approximation for tractability and efficient implementation. The derived scale-aware loss function is used to train SPF-Net. We show that it outperforms various loss functions on four public datasets: UCF-QNRF, UCF CC 50, NWPU, and ShanghaiTech A & B. The proposed SPF-Net can accurately predict the locations of people in the crowd despite being trained on noisy annotations.
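As an illustration of the Gaussian annotation-noise model above, here is a minimal sketch (not the paper's implementation) of how annotated head points rendered as Gaussians yield a density map whose integral equals the crowd count; the fixed `sigma` is an assumed stand-in for the paper's scale-aware, covariance-based treatment.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map_from_points(points, shape, sigma=4.0):
    """Render annotated head points as a crowd density map.

    Each point is modeled as an isotropic Gaussian (the assumed
    annotation-noise model), so the map integrates to the crowd count.
    """
    density = np.zeros(shape, dtype=np.float32)
    for x, y in points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < shape[0] and 0 <= xi < shape[1]:
            density[yi, xi] += 1.0
    # Smoothing with a normalized Gaussian kernel preserves the total count.
    return gaussian_filter(density, sigma=sigma)

# Example: three annotations on a 64x64 image; the map sums to ~3.
dmap = density_map_from_points([(10, 12), (30, 31), (50, 40)], (64, 64))
print(dmap.sum())
```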
Few-shot learning (FSL) has attracted increasing attention in computer vision because it enables model training without requiring excessive data. FSL is challenging because the training and testing categories (the base vs. novel sets) can diverge widely. Conventional transfer-based solutions, which aim to transfer knowledge learned from a large labeled training set to the target test set, are limited because the critical adverse effect of task-distribution shift is not adequately addressed. In this paper, we extend transfer-based solutions by combining the concepts of metric learning and channel attention. To better exploit the feature representations extracted by the feature backbone, we propose a Class-Specific Channel Attention (CSCA) module, which learns to highlight the discriminative channels of each class by assigning each class a CSCA weight vector. Unlike general attention modules designed to learn global class features, the CSCA module learns local and class-specific features with very efficient computation. We evaluate the performance of the CSCA module on standard benchmarks, including miniImageNet, CIFAR-ImageNet, CIFAR-FS, and CUB-200-2011. Experiments are conducted in inductive and in-/cross-domain settings. We achieve new state-of-the-art results.
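A minimal sketch of the class-specific channel attention idea: a learnable channel-weight vector per class, applied multiplicatively to backbone features. Module and parameter names here are hypothetical assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class CSCA(nn.Module):
    """Class-specific channel attention (illustrative sketch).

    One learnable channel-weight vector per class re-weights the
    backbone feature map to highlight class-discriminative channels.
    """
    def __init__(self, num_classes: int, channels: int):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_classes, channels))

    def forward(self, feats: torch.Tensor, class_idx: int) -> torch.Tensor:
        # feats: (B, C, H, W); select this class's channel weights.
        w = torch.sigmoid(self.weights[class_idx]).view(1, -1, 1, 1)
        return feats * w

feats = torch.randn(2, 64, 5, 5)
attn = CSCA(num_classes=5, channels=64)
print(attn(feats, class_idx=3).shape)  # torch.Size([2, 64, 5, 5])
```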
The sixth edition of the AI City Challenge specifically focused on problems in two domains with tremendous untapped potential at the intersection of computer vision and artificial intelligence: intelligent traffic systems (ITS) and brick-and-mortar retail. The four tracks of the 2022 AI City Challenge received participation requests from 254 teams across 27 countries. Track 1 addressed city-scale multi-target multi-camera (MTMC) vehicle tracking. Track 2 addressed natural-language-based vehicle track retrieval. Track 3 was a brand-new track for naturalistic driving analysis, where the data was captured by several cameras mounted inside the vehicle focusing on driver safety, and the task was to classify the driver's actions. Track 4 was another new track, aiming to enable retail stores to run automated checkout using only a single-view camera. We released two leaderboards for submissions based on different methods: a public leaderboard for the contest, where the use of external data is not allowed, and a general leaderboard for all submitted results. The top performances of the participating teams established strong baselines and even outperformed the state-of-the-art in the proposed challenge problems.
Neural vocoders, which convert spectral representations of an audio signal into waveforms, are a commonly used component in speech synthesis pipelines. A vocoder focuses on synthesizing waveforms from low-dimensional representations such as mel-spectrograms. In recent years, different approaches to building such vocoders have been introduced. However, it has become more challenging to evaluate these new vocoders and compare their performance with previous ones. To address this problem, we present VocBench, a framework that benchmarks the performance of state-of-the-art neural vocoders. VocBench uses a systematic study to evaluate different neural vocoders in a shared environment, enabling a fair comparison among them. In our experiments, we use the same setup for the datasets, training pipeline, and evaluation metrics for all neural vocoders. We perform subjective and objective evaluations to compare the performance of each vocoder along different axes. Our results show that the framework is capable of revealing the competitive efficacy of each vocoder and the quality of its synthesized samples. The VocBench framework is available at https://github.com/facebookResearch/Vocoder-Benchmark.
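As a hedged illustration of the kind of objective axis such a benchmark can measure (this is not VocBench's actual API), the sketch below computes a log-mel spectral distance between a reference waveform and a vocoder's resynthesis:

```python
import numpy as np
import librosa

def mel_distance(ref: np.ndarray, syn: np.ndarray, sr: int = 22050) -> float:
    """Log-mel spectral distance between a reference waveform and a
    vocoder's resynthesis: one objective metric a benchmark can report."""
    n = min(len(ref), len(syn))  # trim to a common length before comparing
    mel = lambda y: librosa.feature.melspectrogram(y=y[:n], sr=sr, n_mels=80)
    ref_m, syn_m = np.log(mel(ref) + 1e-5), np.log(mel(syn) + 1e-5)
    return float(np.mean(np.abs(ref_m - syn_m)))
```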
This paper proposes a Parallel Residual Bi-Fusion Feature Pyramid Network (PRB-FPN) for fast and accurate single-shot object detection. Feature pyramids (FPs) are widely used in recent visual detectors, but the top-down pathway of an FP cannot preserve accurate localization due to pooling shifting. The advantage of an FP is weakened as deeper backbones with more layers are used. In addition, it cannot accurately detect small and large objects at the same time. To address these issues, we propose a new parallel FP structure with bi-directional (top-down and bottom-up) fusion and associated improvements to retain high-quality features for accurate localization. We provide the following design improvements: (1) a parallel bi-fusion FP structure with a bottom-up fusion module (BFM) to detect both small and large objects at once with high accuracy; (2) a concatenation and re-organization (CORE) module that provides a bottom-up pathway for feature fusion, leading to a bi-directionally fused FP that can recover lost information from lower-layer feature maps; (3) further purification of the CORE features to retain richer contextual information, where this CORE purification along the top-down and bottom-up pathways can be finished in only a few iterations; (4) a residual design added to the CORE, leading to a new Re-CORE module that can be easily trained and integrated with deeper or lighter backbones. The proposed network achieves state-of-the-art performance on the UAVDT17 and MS COCO datasets. Code is available at https://github.com/pingyang1117/prbnet_pytorch.
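The following is a highly simplified sketch of bi-directional (top-down plus bottom-up) pyramid fusion in the spirit of the design above; the actual BFM/CORE modules are more elaborate, and all names and shapes here are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiDirectionalFP(nn.Module):
    """Simplified bi-directional feature pyramid (illustrative only).

    A top-down pass propagates semantics to high-resolution levels;
    a bottom-up pass then restores localization detail lost to pooling.
    """
    def __init__(self, channels: int = 256):
        super().__init__()
        self.smooth = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feats):
        # feats: list of maps, highest resolution first, same channel count.
        # Top-down: upsample coarse levels and add them into finer ones.
        for i in range(len(feats) - 2, -1, -1):
            up = F.interpolate(feats[i + 1], size=feats[i].shape[-2:])
            feats[i] = feats[i] + up
        # Bottom-up: downsample fine levels and add them back into coarser ones.
        for i in range(1, len(feats)):
            down = F.adaptive_max_pool2d(feats[i - 1], feats[i].shape[-2:])
            feats[i] = self.smooth(feats[i] + down)
        return feats

pyr = [torch.randn(1, 256, s, s) for s in (64, 32, 16)]
outs = BiDirectionalFP()(pyr)
print([o.shape[-1] for o in outs])  # [64, 32, 16]
```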
The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources.
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, cardinality, etc. Our 900M-parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B-parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
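A schematic of the masked-token objective described above, assuming images pre-tokenized against a discrete codebook; the shapes and codebook size below are assumptions for illustration, not Muse's configuration.

```python
import torch
import torch.nn.functional as F

def masked_token_loss(logits, tokens, mask):
    """Cross-entropy on masked image-token positions only: the
    masked-modeling objective described for Muse (schematic)."""
    # logits: (B, N, V); tokens: (B, N) discrete token ids; mask: (B, N) bool.
    return F.cross_entropy(logits[mask], tokens[mask])

B, N, V = 2, 256, 8192         # batch, token grid, codebook size (assumed)
tokens = torch.randint(0, V, (B, N))
mask = torch.rand(B, N) < 0.5  # randomly mask half the tokens
logits = torch.randn(B, N, V, requires_grad=True)
loss = masked_token_loss(logits, tokens, mask)
loss.backward()
```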
An unbiased scene graph generation (SGG) algorithm, referred to as Skew Class-balanced Re-weighting (SCR), is proposed to address the biased predicate predictions caused by the long-tailed distribution. Prior works focus mainly on alleviating the deteriorating performance of minority predicate predictions, but in doing so show drastically dropping recall scores on the majority predicates. The trade-off between majority and minority predicate performance on the limited SGG datasets has not yet been analyzed correctly. In this paper, to alleviate this issue, the Skew Class-balanced Re-weighting (SCR) loss function is proposed for unbiased SGG models. Leveraging the skewness of biased predicate predictions, SCR estimates the target predicate weight coefficients and then assigns larger weights to the biased predicates, trading off better between the majority predicates and the minority ones. Extensive experiments conducted on the standard Visual Genome dataset and Open Images V4 & V6 show the performance and generality of SCR with traditional SGG models.
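As a rough skeleton of predicate re-weighting (SCR's skew-based weight estimation is more involved than this), the sketch below uses a generic effective-number class-balancing weight as a stand-in estimator; all names and sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def reweighted_predicate_loss(logits, targets, class_freq, beta=0.999):
    """Class-balanced re-weighted cross-entropy (skeleton only; a
    stand-in for SCR's skew-based target weight estimation)."""
    # Effective-number class weights: rarer predicates get larger weights.
    eff = 1.0 - torch.pow(beta, class_freq.float())
    w = (1.0 - beta) / eff
    w = w / w.sum() * len(w)           # normalize weights around 1
    return F.cross_entropy(logits, targets, weight=w)

logits = torch.randn(8, 50)            # 50 predicate classes (assumed)
targets = torch.randint(0, 50, (8,))
freq = torch.randint(1, 1000, (50,))   # long-tailed predicate counts
print(reweighted_predicate_loss(logits, targets, freq))
```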
With the increasing capabilities of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few training examples. Exploring ICL to evaluate and extrapolate the abilities of LLMs has become a new trend. In this paper, we aim to survey and summarize the progress, challenges, and future work in ICL. We first present a formal definition of ICL and clarify its relation to related studies. Then, we organize and discuss advanced techniques of ICL, including training strategies, prompting strategies, and so on. Finally, we present the challenges of ICL and provide potential directions for further research. We hope our work can encourage more research on uncovering how ICL works and on improving ICL in future work.
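A minimal example of how an in-context prompt is assembled: a few labeled demonstrations are concatenated with the test input, and the LLM predicts from this context alone, with no parameter updates. The sentiment task and the formatting below are illustrative assumptions.

```python
def build_icl_prompt(demos, query):
    """Assemble a few-shot in-context prompt: the LLM conditions on
    labeled demonstrations plus the new input, with no weight updates."""
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in demos]
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [("Great plot and acting.", "positive"),
         ("A dull, predictable mess.", "negative")]
print(build_icl_prompt(demos, "Surprisingly heartfelt and funny."))
```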
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL) yet has been criticized for learning inefficiency. We believe insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjoint regulation, raising the number of tokens used for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to respectively predict invisible (masked) and visible (unmasked) tokens with superior learning targets. Rooted in orthogonal perspectives on improving training efficiency, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train ViT with half the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves the linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks such as semantic segmentation and object detection, DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and models will be made public at https://github.com/mx-mark/DMJD.
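A simplified sketch of the disjoint masking idea: per-view masks are drawn from a shared permutation so the views' masked regions do not overlap, raising the total number of tokens used as reconstruction targets. This assumes mask ratios small enough for full disjointness, a simplification of DM's disjoint regulation.

```python
import torch

def disjoint_masks(num_tokens: int, num_views: int, mask_ratio: float = 0.5):
    """Sample per-view token masks that do not overlap across views,
    so together the views cover more reconstruction targets per image."""
    perm = torch.randperm(num_tokens)
    per_view = int(num_tokens * mask_ratio)
    assert per_view * num_views <= num_tokens, "views must fit disjointly"
    masks = []
    for v in range(num_views):
        idx = perm[v * per_view:(v + 1) * per_view]
        m = torch.zeros(num_tokens, dtype=torch.bool)
        m[idx] = True
        masks.append(m)
    return masks

m1, m2 = disjoint_masks(num_tokens=196, num_views=2, mask_ratio=0.5)
print((m1 & m2).sum().item())  # 0: masked regions are disjoint
```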